📚 node [[tpu|tpu]]
⥅ related node [[2008 09 26 nowpublic mostpublic index for vancouver]]
⥅ related node [[cloud_tpu]]
⥅ related node [[output_layer]]
⥅ related node [[tensor_processing_unit_(tpu)]]
⥅ related node [[tpu]]
⥅ related node [[tpu_chip]]
⥅ related node [[tpu_device]]
⥅ related node [[tpu_master]]
⥅ related node [[tpu_node]]
⥅ related node [[tpu_pod]]
⥅ related node [[tpu_resource]]
⥅ related node [[tpu_slice]]
⥅ related node [[tpu_type]]
⥅ related node [[tpu_worker]]
⥅ related node [[20210318213618 input_output_tables]]
⥅ node [[tpu]] pulled by Agora
⥅ node [[tpu_chip]] pulled by Agora

TPU chip

Go back to the [[AI Glossary]]

#TensorFlow #GoogleCloud

A programmable linear algebra accelerator with on-chip high bandwidth memory that is optimized for machine learning workloads. Multiple TPU chips are deployed on a TPU device.

⥅ node [[tpu_device]] pulled by Agora

TPU device

Go back to the [[AI Glossary]]

#TensorFlow #GoogleCloud

A printed circuit board (PCB) with multiple TPU chips, high bandwidth network interfaces, and system cooling hardware.

⥅ node [[tpu_master]] pulled by Agora

TPU master

Go back to the [[AI Glossary]]

#TensorFlow #GoogleCloud

The central coordination process running on a host machine that sends and receives data, results, programs, and performance and system-health information to and from the TPU workers. The TPU master also manages the setup and shutdown of TPU devices.

⥅ node [[tpu_node]] pulled by Agora

TPU node

Go back to the [[AI Glossary]]

#TensorFlow #GoogleCloud

A TPU resource on Google Cloud Platform with a specific TPU type. The TPU node connects to your VPC network from a peer VPC network. TPU nodes are a resource defined in the Cloud TPU API.
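As a hedged sketch, a TPU node of a given TPU type can be created with the `gcloud` CLI; the node name, zone, and network below are placeholder assumptions, and the exact flags depend on your gcloud version:

```shell
# Sketch only: create a TPU node (a Cloud TPU API resource) of type v3-8.
# "demo-tpu", the zone, and the network are placeholders, not recommendations.
gcloud compute tpus create demo-tpu \
  --zone=us-central1-b \
  --accelerator-type=v3-8 \
  --version=2.8.0 \
  --network=default
```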

⥅ node [[tpu_pod]] pulled by Agora

TPU Pod

Go back to the [[AI Glossary]]

#TensorFlow #GoogleCloud

A specific configuration of TPU devices in a Google data center. All of the devices in a TPU Pod are connected to one another over a dedicated high-speed network. A TPU Pod is the largest configuration of TPU devices available for a specific TPU version.

⥅ node [[tpu_resource]] pulled by Agora

TPU resource

Go back to the [[AI Glossary]]

#TensorFlow #GoogleCloud

A TPU entity on Google Cloud Platform that you create, manage, or consume. For example, TPU nodes and TPU types are TPU resources.

⥅ node [[tpu_slice]] pulled by Agora

TPU slice

Go back to the [[AI Glossary]]

#TensorFlow #GoogleCloud

A TPU slice is a fractional portion of the TPU devices in a TPU Pod. All of the devices in a TPU slice are connected to one another over a dedicated high-speed network.

⥅ node [[tpu_type]] pulled by Agora

TPU type

Go back to the [[AI Glossary]]

#TensorFlow #GoogleCloud

A configuration of one or more TPU devices with a specific TPU hardware version. You select a TPU type when you create a TPU node on Google Cloud Platform. For example, a v2-8 TPU type is a single TPU v2 device with 8 cores. A v3-2048 TPU type has 256 networked TPU v3 devices and a total of 2048 cores. TPU types are a resource defined in the Cloud TPU API.
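The arithmetic above (a v3-2048 type is 2048 cores spread over 256 devices of 8 cores each) can be sketched with a small parser. The function name is hypothetical, and the 8-cores-per-device figure is an assumption that holds for TPU v2 and v3 devices as described here:

```python
# Hypothetical helper: parse a Cloud TPU type string such as "v2-8" or
# "v3-2048" into its hardware version, core count, and device count.
CORES_PER_DEVICE = 8  # assumption: holds for TPU v2 and v3 devices

def parse_tpu_type(tpu_type: str) -> dict:
    """Split 'v3-2048' into version 'v3' and 2048 cores, then derive devices."""
    version, cores_str = tpu_type.split("-")
    cores = int(cores_str)
    return {
        "version": version,
        "cores": cores,
        "devices": cores // CORES_PER_DEVICE,
    }

print(parse_tpu_type("v2-8"))     # a single TPU v2 device
print(parse_tpu_type("v3-2048"))  # 256 networked TPU v3 devices
```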

⥅ node [[tpu_worker]] pulled by Agora

TPU worker

Go back to the [[AI Glossary]]

#TensorFlow #GoogleCloud

A process that runs on a host machine and executes machine learning programs on TPU devices.
